Deep Residual Autoencoders for Expectation Maximization-Inspired Dictionary Learning

Authors

Abstract

We introduce a neural-network architecture, termed the constrained recurrent sparse autoencoder (CRsAE), that solves convolutional dictionary learning problems, thus establishing a link between dictionary learning and neural networks. Specifically, we leverage the interpretation of the alternating-minimization algorithm for dictionary learning as an approximate Expectation-Maximization (EM) algorithm to develop autoencoders that enable the simultaneous training of the dictionary and of the regularization parameter (ReLU bias). The forward pass of the encoder approximates the sufficient statistics of the E-step as the solution to a sparse coding problem, computed with an iterative proximal-gradient algorithm called FISTA. The encoder can be interpreted either as a recurrent neural network or as a deep residual network, with two-sided ReLU non-linearities in both cases. The M-step is implemented via a two-stage back-propagation. The first stage relies on a linear decoder applied to the encoder output and a norm-squared loss; it parallels the dictionary update step in dictionary learning. The second stage updates the regularization parameter by applying a loss function that includes a prior on the parameter motivated by Bayesian statistics. We demonstrate in an image-denoising task that CRsAE learns Gabor-like filters and that the EM-inspired approach for learning biases is superior to the conventional approach. In an application to recordings of electrical activity from the brain, we show that CRsAE learns realistic spike templates and speeds up the process of identifying spike times by 900x compared to algorithms based on convex optimization.
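To make the encoder concrete, the following is a minimal numpy sketch of the FISTA sparse-coding loop whose soft-thresholding step is exactly the two-sided ReLU mentioned in the abstract. The dictionary, regularization value, and iteration count below are toy choices for illustration, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, lam):
    # Two-sided ReLU: relu(z - lam) - relu(-z - lam) = sign(z) * max(|z| - lam, 0)
    return np.maximum(z - lam, 0.0) - np.maximum(-z - lam, 0.0)

def fista_encoder(y, D, lam, n_iter=100):
    """Approximate argmin_x 0.5 * ||y - D x||^2 + lam * ||x||_1 via FISTA.
    Unrolling these iterations as layers yields an encoder of this style."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        # Proximal gradient step (the soft threshold is the two-sided ReLU)
        x_new = soft_threshold(z + D.T @ (y - D @ z) / L, lam / L)
        # Momentum update that distinguishes FISTA from plain ISTA
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage: recover a 2-sparse code from a random unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = fista_encoder(y, D, lam=0.05, n_iter=200)
```

A linear decoder `D @ x_hat` and a norm-squared loss on the reconstruction would then correspond to the first back-propagation stage described above.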


Similar articles

Expectation-Maximization for Learning Determinantal Point Processes

A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a given task, we would like to learn the entries of its kernel matrix by maximizing the log-likelihood of the available data. However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjec...
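For concreteness, the log-likelihood referred to in this excerpt has a closed form for an L-ensemble DPP: log P(A) = log det(L_A) - log det(L + I), where L_A is the submatrix of the kernel indexed by the observed subset A. A minimal numpy sketch (the function name and toy kernel are illustrative, not from the paper):

```python
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Sum of log P(A) = log det(L_A) - log det(L + I) over observed subsets."""
    n = L.shape[0]
    # Normalization constant, shared by every subset
    log_norm = np.linalg.slogdet(L + np.eye(n))[1]
    ll = 0.0
    for A in subsets:
        idx = np.asarray(A)
        # slogdet is numerically safer than log(det(...)) for larger kernels
        _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += logdet - log_norm
    return ll

# Toy usage: diagonal kernel, so subset probabilities factorize
L = np.diag([1.0, 2.0, 3.0])
ll_single = dpp_log_likelihood(L, [[0]])   # log(1) - log(2 * 3 * 4)
```

The non-convexity mentioned in the excerpt arises because this objective is maximized over the entries of `L` itself.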


Using Expectation-Maximization for Reinforcement Learning

We discuss Hinton’s (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-...


Greedy Deep Dictionary Learning

In this work we propose a new deep learning tool – deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion – one layer at a time. This requires solving a simple (shallow) dictionary learning problem; the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like s...
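The greedy layer-wise scheme in this excerpt can be sketched in a few lines of numpy: fit one shallow dictionary, then feed its codes to the next layer as data. For brevity this sketch solves each shallow layer with plain alternating least squares rather than a sparsity-penalized solver, so it is an illustration of the stacking idea only, not the paper's method.

```python
import numpy as np

def shallow_dict_learn(X, n_atoms, n_iter=30, rng=None):
    """One shallow layer via alternating minimization: least-squares codes,
    then a least-squares dictionary update with unit-norm atoms."""
    rng = rng or np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    Z = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_iter):
        Z = np.linalg.lstsq(D, X, rcond=None)[0]          # code update
        D = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T    # dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12            # renormalize atoms
    return D, Z

def greedy_deep_dict_learn(X, layer_sizes):
    """Stack layers greedily: each layer's codes become the next layer's data."""
    dicts, Z = [], X
    for k in layer_sizes:
        D, Z = shallow_dict_learn(Z, k)
        dicts.append(D)
    return dicts, Z

# Toy usage: 30-dimensional data, two layers of 20 and 10 atoms
X = np.random.default_rng(1).standard_normal((30, 100))
dicts, codes = greedy_deep_dict_learn(X, [20, 10])
```

Each layer is trained once and then frozen, which is what makes the scheme greedy rather than jointly optimized.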


Learning Deep State Representations With Convolutional Autoencoders

Advances in artificial intelligence algorithms and techniques are quickly allowing us to create artificial agents that interact with the real world. However, these agents need to maintain a carefully constructed abstract representation of the world around them [9]. Recent research in deep reinforcement learning attempts to overcome this challenge. Mnih et al. [24] at DeepMind and Levine et al. ...



Journal

Journal title: IEEE Transactions on Neural Networks and Learning Systems

Year: 2021

ISSN: 2162-237X, 2162-2388

DOI: https://doi.org/10.1109/tnnls.2020.3005348